
    Achieving MAX-MIN Fair Cross-efficiency scores in Data Envelopment Analysis

    Algorithmic decision making is gaining popularity in today's business environment. The need for fast, accurate, and complex decisions pushes decision-makers to rely on algorithms. However, algorithms can create unwanted bias or undesired consequences that ought to be averted. In this paper, we propose a MAX-MIN fair cross-efficiency data envelopment analysis (DEA) model that addresses the problem of high variance in cross-efficiency scores. The MAX-MIN cross-efficiency procedure is in accordance with John Rawls's Theory of Justice: it estimates efficiency and cross-efficiency so that the greatest benefit accrues to the least-advantaged decision-making unit (DMU). The proposed mathematical model is tested on a healthcare-related dataset. The results suggest that the method resolves several issues with cross-efficiency scores. First, it enables full rankings by discriminating between the efficiency scores of DMUs. Second, the variance of cross-efficiency scores is reduced. Finally, fairness is introduced through optimization of the minimal efficiency scores.
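    The abstract does not spell out the model, but standard cross-efficiency notation makes the MAX-MIN idea concrete. A minimal sketch, assuming the usual input-oriented CCR formulation with input vectors x_j, output vectors y_j, and weight vectors u, v; the paper's actual constraints may differ:

```latex
% Sketch: standard CCR efficiency and a Rawlsian (max-min) secondary goal.
% The exact model proposed in the paper may differ.
\[
  E_{kk} \;=\; \max_{u,v \ge 0} \ \frac{u^{\top} y_k}{v^{\top} x_k}
  \quad \text{s.t.} \quad \frac{u^{\top} y_j}{v^{\top} x_j} \le 1, \quad j = 1,\dots,n,
\]
\[
  E_{kj} \;=\; \frac{u_k^{\top} y_j}{v_k^{\top} x_j}
  \qquad \text{(cross-efficiency of DMU $j$ under DMU $k$'s weights)}.
\]
% Max-min fairness: among weight vectors that keep E_{kk} at its optimum,
% choose those that maximize the score of the worst-off DMU.
\[
  \max_{u_k, v_k \ge 0} \; \min_{j} \; E_{kj}
  \quad \text{s.t.} \quad \frac{u_k^{\top} y_k}{v_k^{\top} x_k} = E_{kk}^{*}, \quad
  \frac{u_k^{\top} y_j}{v_k^{\top} x_j} \le 1 \;\; \forall j.
\]
```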

    Machine learning in scientific grant review: algorithmically predicting project efficiency in high energy physics

    As more objections are raised against grant peer review for being costly and time-consuming, the legitimate question arises of whether machine learning algorithms could help assess the epistemic efficiency of proposed projects. As a case study, we investigated whether project efficiency in high energy physics (HEP) can be algorithmically predicted from the data available in the proposal. To analyze the potential of algorithmic prediction in HEP, we conducted a study of the structure and outcomes of HEP experiments with the goal of predicting their efficiency. In the first step, we assessed project efficiency using data envelopment analysis (DEA) on 67 experiments conducted at the HEP laboratory Fermilab. In the second step, we employed predictive algorithms to detect which team structures maximize the epistemic performance of an expert group. For this purpose, we applied predictive algorithms (lasso and ridge linear regression, a neural network, and gradient-boosted trees) to the efficiency scores obtained by DEA. The predictive analyses show moderately high accuracy, indicating that such models could be beneficial as one step in grant review. Still, their applicability in practice should be approached with caution. Limitations of the algorithmic approach include the unreliability of citation patterns, unobservable variables that influence scientific success, and the potential predictability of the model.
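    A minimal sketch of the second step described above, assuming scikit-learn and hypothetical team-structure features; the paper's actual features, data, and hyperparameters are not given in the abstract:

```python
# Sketch: regress DEA efficiency scores on team-structure features, as in the
# abstract's second step. Feature names and data here are hypothetical.
import numpy as np
from sklearn.linear_model import Lasso, Ridge
from sklearn.neural_network import MLPRegressor
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 67  # number of Fermilab experiments in the study

# Hypothetical team-structure features: team size, number of teams, duration.
X = np.column_stack([
    rng.integers(3, 200, n),   # team size
    rng.integers(1, 10, n),    # number of teams
    rng.uniform(1, 15, n),     # completion time in years
])
y = rng.uniform(0, 1, n)       # DEA efficiency scores (placeholder)

models = {
    "lasso": Lasso(alpha=0.01),
    "ridge": Ridge(alpha=1.0),
    "neural net": MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000),
    "gradient boosting": GradientBoostingRegressor(),
}
for name, model in models.items():
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"{name:>18}: mean CV R^2 = {r2:.2f}")
```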

    When Should We Stop Investing in a Scientific Project? The Halting Problem in Experimental Physics

    The question of when to stop an unsuccessful experiment can be difficult to answer from an individual perspective. To help guide these decisions, we turn to the social epistemology of science and investigate knowledge acquisition within a group. We focus on the expensive and lengthy experiments in high energy physics, which are suitable for citation-based analysis because consensus about the importance of results forms relatively quickly and reliably in the field. In particular, we tested whether the time spent on a scientific project correlates with the project's output. Our results, based on data from the high energy physics laboratory Fermilab, point to an epistemic saturation point in experimentation, after which the likelihood of obtaining major results drops. Over time the number of less significant publications continues to grow, but highly cited ones stop appearing. Since many projects continue to run after the epistemic saturation point, it becomes clear that decisions about continuing them are not always rational.
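    As a rough illustration of the citation-based test described above, the following sketch checks for a correlation and a saturation point on made-up data; it is not the paper's actual analysis:

```python
# Sketch: test whether time spent on a project correlates with its highly cited
# output, and look for a saturation point. Data and the cutoff are hypothetical.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
duration_years = rng.uniform(1, 20, 67)          # time spent per experiment
# Placeholder outputs: highly cited papers taper off for long-running projects.
highly_cited = rng.poisson(np.clip(8 - 0.4 * duration_years, 0.1, None))

rho, p = spearmanr(duration_years, highly_cited)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")

# Crude saturation check: compare output before and after a candidate cutoff.
cutoff = 10.0
early = highly_cited[duration_years <= cutoff].mean()
late = highly_cited[duration_years > cutoff].mean()
print(f"mean highly cited papers: <= {cutoff} yrs: {early:.1f}, > {cutoff} yrs: {late:.1f}")
```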

    SOLVING FIRST ORDER DIFFERENTIAL EQUATIONS WITH GENETIC ALGORITHMS

    This paper presents two methods for solving the Cauchy problem for first-order ordinary differential equations. Both methods are based on genetic algorithms (GA) and are compared with one another under different population-mating schemes. In addition, the GA approaches are compared with the simplest and most commonly used numerical methods for ordinary differential equations. The GAs yield satisfactory approximations of the solutions and are more accurate than some of the numerical methods: the fifth-order Runge-Kutta method gives the best approximation of the exact solution, while the Euler method with step 0.1 produces larger relative errors than the GAs. Nevertheless, the practical applicability of GAs to first-order differential equations is very limited, since their execution time is several thousand times longer than that of the other methods.
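    The abstract does not describe the GA design, so the following is only an illustrative sketch of solving a Cauchy problem with a genetic algorithm, here evolving polynomial coefficients for the example equation y' = -y, y(0) = 1:

```python
# Sketch: a simple genetic algorithm for y' = f(x, y), y(0) = 1, evolving the
# coefficients of a polynomial trial solution. Encoding, mating scheme, and
# fitness are illustrative choices, not the paper's exact design.
import numpy as np

rng = np.random.default_rng(42)
f = lambda x, y: -y                # example ODE: y' = -y, exact solution exp(-x)
x = np.linspace(0.0, 1.0, 21)
y0 = 1.0

def trial(c, x):                   # polynomial trial solution y(x) = sum c_k x^k
    return np.polyval(c[::-1], x)

def d_trial(c, x):                 # derivative of the trial solution
    dc = c[1:] * np.arange(1, len(c))
    return np.polyval(dc[::-1], x)

def fitness(c):                    # ODE residual plus initial-condition penalty
    res = d_trial(c, x) - f(x, trial(c, x))
    return np.mean(res**2) + (trial(c, np.array([0.0])) - y0)[0] ** 2

pop = rng.normal(0, 1, (200, 5))   # population of degree-4 coefficient vectors
for gen in range(300):
    scores = np.array([fitness(c) for c in pop])
    parents = pop[np.argsort(scores)[:50]]           # truncation selection
    kids = []
    for _ in range(150):
        a, b = parents[rng.integers(0, 50, 2)]
        child = np.where(rng.random(5) < 0.5, a, b)  # uniform crossover
        child = child + rng.normal(0, 0.05, 5)       # Gaussian mutation
        kids.append(child)
    pop = np.vstack([parents, kids])

best = min(pop, key=fitness)
print("max abs error vs exp(-x):", np.max(np.abs(trial(best, x) - np.exp(-x))))
```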

    PREDICTION OF TRAFFIC INTENSITY AT PAY TOLL STATIONS

    This paper presents a method for predicting traffic intensity at a toll-collection system for a pre-specified number of open toll booths. The toll system is modeled as a set of single-channel queuing systems, where each channel represents one toll booth. The methodology combines recurrent neural networks with a queuing model: by predicting the traffic-intensity parameter for a given number of open channels, it evaluates the state probabilities of the toll system and its total cost. A long short-term memory (LSTM) neural network with layer normalization is used to predict the intensity parameter. Twenty-four network architectures were analyzed, and the one with the best predictive performance was selected as the predictor of vehicle arrival intensity.
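    A minimal sketch of how a predicted arrival intensity can feed a single-channel queuing model to compare costs for different numbers of open booths; the cost coefficients and the even split of traffic are assumptions, not the paper's model:

```python
# Sketch: combine a predicted arrival intensity with an M/M/1 queuing model per
# open toll booth. Cost coefficients and the traffic split are hypothetical.
import numpy as np

def mm1_state_probs(lam, mu, n_states=20):
    """Steady-state probabilities p_k = (1 - rho) * rho**k for one booth."""
    rho = lam / mu
    assert rho < 1, "queue must be stable (lambda < mu)"
    k = np.arange(n_states)
    return (1 - rho) * rho**k

def total_cost(lam_total, mu, n_open, cost_booth=10.0, cost_wait=2.0):
    """Operating cost of open booths plus waiting cost per queued vehicle."""
    lam = lam_total / n_open                 # arrivals split evenly across booths
    rho = lam / mu
    avg_in_system = rho / (1 - rho)          # M/M/1 mean number in system
    return n_open * cost_booth + n_open * avg_in_system * cost_wait

lam_pred = 3.4   # vehicles/min, e.g. the output of the LSTM intensity predictor
mu = 1.5         # service rate per booth (vehicles/min), assumed
for n_open in (3, 4, 5):
    print(n_open, "booths -> cost", round(total_cost(lam_pred, mu, n_open), 2))
```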

    Foregut caustic injuries: results of the World Society of Emergency Surgery consensus conference


    Optimal research team composition: data envelopment analysis of Fermilab experiments

    We employ data envelopment analysis on a series of experiments performed at Fermilab, one of the major high-energy physics laboratories in the world, to test their efficiency (as measured by publication and citation rates) against variations in team size, number of teams per experiment, and completion time. We present and analyze the results, focusing in particular on the inherent connections between quantitative team composition and diversity, and discuss them in relation to other factors contributing to scientific production in a wider sense. Our results concur with those of other studies across the sciences showing that smaller research teams are more productive, and with the conjecture of a curvilinear dependence of efficiency on team size.
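    For readers unfamiliar with DEA, a minimal sketch of the input-oriented CCR efficiency computation with team size and completion time as inputs and publications and citations as outputs; the numbers are illustrative, not Fermilab data:

```python
# Sketch: input-oriented CCR efficiency in multiplier form via linear programming.
# Inputs/outputs mirror the abstract's setup; the data below are made up.
import numpy as np
from scipy.optimize import linprog

# rows = experiments (DMUs); X: inputs, Y: outputs
X = np.array([[ 30.0,  4.0],    # [team size, completion time in years]
              [120.0,  9.0],
              [ 55.0,  6.0],
              [200.0, 12.0]])
Y = np.array([[ 25.0,  800.0],  # [publications, citations]
              [ 40.0, 1500.0],
              [ 30.0,  900.0],
              [ 45.0, 1600.0]])

def ccr_efficiency(o, X, Y):
    n, m = X.shape
    _, s = Y.shape
    # variables: [u_1..u_s, v_1..v_m]; maximize u.y_o  <=>  minimize -u.y_o
    c = np.concatenate([-Y[o], np.zeros(m)])
    A_eq = np.concatenate([np.zeros(s), X[o]])[None, :]   # v.x_o = 1
    b_eq = [1.0]
    A_ub = np.hstack([Y, -X])                              # u.y_j - v.x_j <= 0
    b_ub = np.zeros(n)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, method="highs")
    return -res.fun                                        # efficiency of DMU o

for o in range(len(X)):
    print(f"experiment {o}: efficiency = {ccr_efficiency(o, X, Y):.3f}")
```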

    When Fairness Meets Consistency in AHP Pairwise Comparisons

    We propose introducing fairness constraints into one of the most widely used multi-criteria decision-making methods, the analytic hierarchy process (AHP). We offer a solution that guarantees consistency while respecting legally binding fairness constraints in AHP pairwise comparison matrices. In a synthetic experiment, we generate comparison matrices of different sizes and different ranges/levels of the initial parameters (i.e., consistency ratio and disparate impact). We optimize disparate impact for various combinations of these initial parameters and matrix sizes while keeping consistency at an acceptable level and minimizing the deviation of the pairwise comparison matrices (or their upper triangles) before and after optimization. We use a metaheuristic genetic algorithm to formulate this dually motivated problem and run a discrete optimization procedure (tied to Saaty's 9-point scale). The results confirm the initial hypothesis, holding in 99.5% of 2,800 optimization runs, that a fair ranking can be achieved while respecting consistency in AHP pairwise comparison matrices (when comparing alternatives with respect to a given criterion), thus meeting two challenging goals simultaneously. This research contributes to initiatives directed toward unbiased decision-making, whether automated or algorithm-assisted (the case covered by this research).
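    The consistency side of this trade-off is the standard Saaty consistency ratio; a minimal sketch of its computation follows, while the disparate-impact constraint and the genetic-algorithm search are not reproduced here:

```python
# Sketch: Saaty's consistency ratio (CR) for an AHP pairwise comparison matrix.
import numpy as np

# Random-index values for matrix sizes 1..10 (Saaty).
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12,
      6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45, 10: 1.49}

def consistency_ratio(A):
    n = A.shape[0]
    eigvals = np.linalg.eigvals(A)
    lam_max = max(eigvals.real)                 # principal eigenvalue
    ci = (lam_max - n) / (n - 1)                # consistency index
    return ci / RI[n]

# Example 3x3 reciprocal matrix on Saaty's 1-9 scale.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
print(f"CR = {consistency_ratio(A):.3f}  (CR <= 0.10 is the usual acceptability threshold)")
```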